3 research outputs found

    Number Systems for Deep Neural Network Architectures: A Survey

    Deep neural networks (DNNs) have become an enabling component for a myriad of artificial intelligence applications. DNNs have sometimes shown superior performance, even compared to humans, in applications such as self-driving and health care. Because of their computational complexity, deploying DNNs on resource-constrained devices still faces many challenges related to computing complexity, energy efficiency, latency, and cost. To this end, several research directions are being pursued by both academia and industry to accelerate and efficiently implement DNNs. One important direction is determining the appropriate data representation for the massive amount of data involved in DNN processing. Using conventional number systems has been found to be sub-optimal for DNNs, and a great body of research therefore focuses on exploring suitable alternatives. This article provides a comprehensive survey and discussion of alternative number systems for more efficient representations of DNN data. Various number systems (conventional and unconventional) exploited for DNNs are discussed, and the impact of these number systems on the performance and hardware design of DNNs is considered. In addition, the paper highlights the challenges associated with each number system and the various solutions that have been proposed to address them. The reader will be able to understand the importance of an efficient number system for DNNs, learn about the widely used number systems for DNNs, understand the trade-offs between various number systems, and consider the various design aspects that affect the impact of number systems on DNN performance. Recent trends and related research opportunities are also highlighted.
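
    As a minimal illustration, and not code taken from the survey itself, the sketch below shows one kind of alternative representation such surveys cover: mapping float32 DNN weights to a signed 8-bit fixed-point format. The scale choice and rounding scheme are assumptions made only for the example.

```python
# Minimal sketch (not from the survey): quantize float32 DNN weights to a
# signed 8-bit fixed-point representation, one alternative to conventional
# floating point. The number of fractional bits is an illustrative choice.
import numpy as np

def quantize_fixed_point(weights: np.ndarray, frac_bits: int = 6) -> np.ndarray:
    """Map float32 weights to signed 8-bit fixed point with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    q = np.round(weights * scale)                 # scale and round to nearest integer
    return np.clip(q, -128, 127).astype(np.int8)  # saturate to the int8 range

def dequantize_fixed_point(q: np.ndarray, frac_bits: int = 6) -> np.ndarray:
    """Recover approximate float32 values from the fixed-point representation."""
    return q.astype(np.float32) / (2 ** frac_bits)

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32) * 0.5
    w_hat = dequantize_fixed_point(quantize_fixed_point(w))
    # The gap between w and w_hat is the representation error traded for
    # smaller storage and cheaper integer arithmetic on the accelerator.
    print("max abs error:", np.max(np.abs(w - w_hat)))
```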

    Optimized power and cell individual offset for cellular load balancing via reinforcement learning

    We consider the problem of jointly optimizing the transmission power and cell individual offsets (CIOs) in the downlink of cellular networks using reinforcement learning. To that end, we reformulate the problem as a Markov decision process (MDP). We abstract the cellular network as a state, which comprises carefully selected key performance indicators (KPIs). We present a novel reward function, namely the penalized throughput, to reflect the tradeoff between the total throughput of the network and the number of covered users. We employ the twin delayed deep deterministic policy gradient (TD3) technique to learn how to maximize the proposed reward function through interaction with the cellular network. We assess the proposed technique by simulating an actual cellular network, whose parameters and base station placement are derived from a 4G network operator, using the NS-3 and SUMO simulators. Our results show the following: 1) optimizing only one of the controls is significantly inferior to jointly optimizing both controls; 2) our proposed technique achieves an 18.4% throughput gain compared with the baseline of fixed transmission power and zero CIOs; and 3) there is a tradeoff between the total throughput of the network and the number of covered users.
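
    The abstract does not give the exact form of the penalized-throughput reward, so the sketch below is only a hedged illustration of the MDP ingredients it describes: a KPI-based state, a joint (power, CIO) control, and a reward that trades total throughput against uncovered users. The penalty weight and the specific KPIs chosen here are assumptions, not the paper's formulation.

```python
# Illustrative sketch only: the penalty term, its weight, and the KPI choices
# below are assumptions made for the example, not the paper's exact reward.
import numpy as np

def penalized_throughput(cell_throughputs_mbps: np.ndarray,
                         num_users: int,
                         num_covered_users: int,
                         penalty_weight: float = 1.0) -> float:
    """Total network throughput minus a penalty for users left uncovered."""
    total_throughput = float(np.sum(cell_throughputs_mbps))
    uncovered = num_users - num_covered_users
    return total_throughput - penalty_weight * uncovered

def build_state(per_cell_load: np.ndarray,
                per_cell_throughput: np.ndarray,
                per_cell_rsrp_mean: np.ndarray) -> np.ndarray:
    """Stack per-cell KPIs into the state vector fed to the TD3 agent."""
    return np.concatenate([per_cell_load, per_cell_throughput, per_cell_rsrp_mean])

# The agent's (continuous) action would jointly set both controls per cell,
# e.g. a transmission power level and a CIO value for each eNodeB.
```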

    Deep reinforcement learning-based CIO and energy control for LTE mobility load balancing

    Congestion has been one of the most common problems in cellular networks due to the huge increase in network load resulting from enhanced communication quality and a growing number of users. Since mobile users are not uniformly distributed across the network, the need for load balancing as a cellular network self-optimization technique has increased recently. The congestion problem can then be handled by evenly distributing the network load among the network resources. A great deal of research has been dedicated to developing load balancing models for cellular networks, most of which rely on adjusting the Cell Individual Offset (CIO) parameters designed for self-optimization in cellular networks. In this paper, a new deep reinforcement learning-based load balancing approach is proposed as a solution to the LTE downlink congestion problem. This approach does not rely only on adapting the CIO parameters; rather, it has two degrees of control: the first is adjusting the CIO parameters, and the second is adjusting the eNodeBs' transmission power. The proposed model uses a Double Deep Q-Network (DDQN) to learn how to adjust these parameters so that a better load distribution in the overall network is achieved. Simulation results prove the effectiveness of the proposed approach, improving the overall network throughput by up to 21.4% and 6.5% compared to the baseline scheme and the scheme that only adapts CIOs, respectively.
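
    As a rough sketch of the learning rule named in the abstract, the snippet below shows the standard Double DQN target over a discrete action space that enumerates joint (CIO step, power step) adjustments. The action discretization, step sizes, and discount factor are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of the Double DQN update target, assuming a discrete action
# space of joint (CIO delta, power delta) adjustments. Step sizes and gamma
# are illustrative assumptions, not the paper's settings.
import numpy as np

def double_dqn_target(reward: np.ndarray,
                      next_q_online: np.ndarray,
                      next_q_target: np.ndarray,
                      done: np.ndarray,
                      gamma: float = 0.99) -> np.ndarray:
    """Compute y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    next_q_online / next_q_target have shape (batch, num_actions).
    """
    # Action selection uses the online network, evaluation uses the target
    # network -- the decoupling that distinguishes Double DQN from plain DQN.
    best_actions = np.argmax(next_q_online, axis=1)
    next_values = next_q_target[np.arange(len(best_actions)), best_actions]
    return reward + gamma * (1.0 - done) * next_values

# Example joint action space: each discrete action maps to a (CIO delta in dB,
# power delta in dBm) pair applied to a selected eNodeB.
CIO_STEPS_DB = [-3.0, 0.0, 3.0]
POWER_STEPS_DBM = [-2.0, 0.0, 2.0]
ACTIONS = [(c, p) for c in CIO_STEPS_DB for p in POWER_STEPS_DBM]
```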